Learning Deep ResNet Blocks Sequentially using Boosting Theory
Authors
Abstract
Deep neural networks are known to be difficult to train due to the instability of back-propagation. A deep residual network (ResNet) with identity loops remedies this by stabilizing gradient computations. We prove a boosting theory for the ResNet architecture. We construct T weak module classifiers, each containing two of the T layers, such that the combined strong learner is a ResNet. We therefore introduce an alternative Deep ResNet training algorithm, BoostResNet, which is particularly suitable for non-differentiable architectures. Our proposed algorithm requires only the inexpensive sequential training of T “shallow ResNets”. We prove that the training error decays exponentially with the depth T if the weak module classifiers we train perform slightly better than some weak baseline. In other words, we propose a weak learning condition and prove a boosting theory for ResNet under that condition. Our results apply to general multi-class ResNets. A generalization error bound based on margin theory is proved and suggests that ResNet is resistant to overfitting for networks with l1-norm-bounded weights.
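One way to read the construction above: each weak module classifier is a difference of auxiliary classifier outputs over consecutive representations, so summing the T modules telescopes into a single classifier on top of the full ResNet. The notation below (representations g_t, auxiliary classifiers o_t) is ours, a schematic reconstruction from the abstract rather than the paper's exact definition:

$$h_t(x) = o_t\big(g_t(x)\big) - o_{t-1}\big(g_{t-1}(x)\big), \qquad \sum_{t=1}^{T} h_t(x) = o_T\big(g_T(x)\big) - o_0\big(g_0(x)\big),$$

where g_t is the output of the first t residual blocks. If each h_t has a slight edge over the weak baseline, standard boosting arguments drive the error of the telescoped sum down exponentially in T.

The sketch below illustrates the sequential training idea in PyTorch: each two-layer residual block is fitted greedily with its own auxiliary linear head on top of the frozen features produced by earlier blocks. The block architecture, the per-stage cross-entropy objective, and all hyperparameters are illustrative assumptions, not the paper's exact boosting-weighted procedure:

```python
# Minimal sketch of BoostResNet-style sequential block training.
# Assumptions: fully-connected residual blocks, a plain cross-entropy
# objective per stage (the paper uses a boosting-style weak-learning
# objective), and toy random data.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One two-layer residual block: x -> x + W2 relu(W1 x)."""
    def __init__(self, dim):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.body(x)

def train_stage(block, head, feats, labels, epochs=100, lr=1e-2):
    """Fit one block plus its auxiliary head on frozen input features."""
    opt = torch.optim.Adam(
        list(block.parameters()) + list(head.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(head(block(feats)), labels)
        loss.backward()
        opt.step()

torch.manual_seed(0)
dim, n_classes, T = 16, 3, 5
x = torch.randn(256, dim)
y = torch.randint(0, n_classes, (256,))

blocks, feats = [], x
for t in range(T):                      # train T "shallow ResNets" in turn
    block = ResidualBlock(dim)
    head = nn.Linear(dim, n_classes)    # auxiliary classifier for stage t
    train_stage(block, head, feats, y)
    with torch.no_grad():               # freeze: later stages see fixed features
        feats = block(feats)
    blocks.append(block)
```

Because each stage trains only one shallow block, memory and compute per step stay constant in T, which is the source of the claimed efficiency over end-to-end back-propagation.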
Similar Resources
Learning Deep ResNet Blocks Sequentially
We prove a multiclass boosting theory for ResNet architectures which simultaneously creates a new technique for multiclass boosting and provides a new algorithm for ResNet-style architectures. Our proposed training algorithm, BoostResNet, is particularly suitable for non-differentiable architectures. Our method only requires the relatively inexpensive sequential training of T “shallow ResNet...
Functional Gradient Boosting based on Residual Network Perception
Residual Networks (ResNets) have become state-of-the-art models in deep learning and several theoretical studies have been devoted to understanding why ResNet works so well. One attractive viewpoint on ResNet is that it is optimizing the risk in a functional space by combining an ensemble of effective features. In this paper, we adopt this viewpoint to construct a new gradient boosting method, w...
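In this functional view (notation ours, a sketch of the idea rather than the paper's formalism), adding a residual block acts like one step of gradient descent in function space on the risk $\mathcal{R}$:

$$F_{t+1} = F_t + \eta_t\, f_t, \qquad f_t \approx -\nabla_F \mathcal{R}(F_t),$$

so a trained ResNet resembles a greedily built ensemble whose blocks are successive functional gradient steps.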
Identity Mappings in Deep Residual Networks
Deep residual networks [1] have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connec...
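Concretely, the propagation identities derived in that paper show that with identity skip connections both the representation and the gradient decompose additively across blocks:

$$x_L = x_l + \sum_{i=l}^{L-1} \mathcal{F}(x_i, W_i), \qquad \frac{\partial \mathcal{E}}{\partial x_l} = \frac{\partial \mathcal{E}}{\partial x_L}\left(1 + \frac{\partial}{\partial x_l} \sum_{i=l}^{L-1} \mathcal{F}(x_i, W_i)\right),$$

where the additive term 1 means the gradient of the loss $\mathcal{E}$ reaches every lower block directly and is unlikely to vanish.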
Deep Residual Learning for Weakly-Supervised Relation Extraction
Deep residual learning (ResNet) (He et al., 2016) is a new method for training very deep neural networks using identity mapping for shortcut connections. ResNet has won the ImageNet ILSVRC 2015 classification task, and achieved state-of-the-art performances in many computer vision tasks. However, the effect of residual learning on noisy natural language processing tasks is still not well underst...
Notes: A Continuous Model of Neural Networks. Part I: Residual Networks
Based on a natural connection between ResNet and the transport equation or its characteristic equation, we propose a continuous flow model for both ResNet and the plain net. Through this continuous model, a ResNet can be explicitly constructed as a refinement of a plain net. The flow model provides an alternative perspective to understand phenomena in deep neural networks, such as why it is necessary a...
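The connection being used here (written in our notation, as a sketch) is that a residual block is a forward-Euler discretization of the characteristic ODE of a transport equation:

$$\frac{dx(t)}{dt} = f\big(x(t), t\big) \quad\Longrightarrow\quad x_{n+1} = x_n + \Delta t\, f(x_n, t_n),$$

with step size $\Delta t = 1$ recovering the familiar residual update $x_{n+1} = x_n + \mathcal{F}(x_n)$, whereas a plain net lacks the identity term and so does not correspond to such a discretization.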
Journal: CoRR
Volume: abs/1706.04964
Pages: -
Publication date: 2017